
[DataProcessor] Refactor and unify text/multimodal processor pipeline#7747

Open
luukunn wants to merge 17 commits into PaddlePaddle:develop from luukunn:merge_2

Conversation

@luukunn
Collaborator

@luukunn luukunn commented May 7, 2026

Motivation

This PR merges and refactors FastDeploy's Processor system, unifying the request-preprocessing pipeline for text and multimodal models and improving the maintainability and extensibility of the code.

With this change, the multimodal input-handling logic is split into modules and plugged into the unified Processor framework, providing a cleaner implementation base for supporting and maintaining models such as Qwen2.5-VL, Qwen3-VL, ERNIE 4.5 VL, and PaddleOCR-VL.

Modifications

  • Add a unified Processor implementation (fastdeploy/input/processor.py) that absorbs the existing text-processing pipeline.
  • Rework the processor-creation logic in fastdeploy/input/preprocess.py:
    • Text models all use the unified Processor
    • Multimodal models mount the corresponding multimodal processor on top of it
  • Add a multimodal processing framework under fastdeploy/input/multimodal/, including:
    • The MMProcessor abstract base class
    • QwenVLProcessor
    • Qwen3VLProcessor
    • Ernie4_5VLProcessor
    • PaddleOCRVLProcessor
  • Add a shared multimodal utility module fastdeploy/input/multimodal/common.py that centralizes common logic such as image resizing and pixel-range checks.
  • Add per-model image processors under fastdeploy/input/multimodal/image_processors/:
    • QwenImageProcessor
    • Qwen3ImageProcessor
    • AdaptiveImageProcessor
    • PaddleOCRImageProcessor
  • Adjust the chat-request flow so that messages pass through more naturally in multimodal scenarios:
    • Modify fastdeploy/entrypoints/llm.py
    • Modify fastdeploy/entrypoints/chat_utils.py
  • Unify the messages -> prompt / multimodal_data handling so the text and multimodal paths diverge less.
  • Add and update related tests, including:
    • Processor initialization and preprocess-flow tests
    • Chat / generation tests
    • Multimodal shared-utility tests
    • Qwen / Qwen3 / ERNIE / PaddleOCR multimodal processor tests
    • Image processor and cache-related logic tests

Usage or Command

The related tests can be run with:

python -m pytest tests/input/multimodal
python -m pytest tests/input/test_preprocess.py
python -m pytest tests/entrypoints/test_chat.py
python -m pytest tests/entrypoints/test_generation.py

Accuracy Tests

This PR focuses on refactoring the Processor architecture and reorganizing the multimodal input pipeline; it does not directly modify model forward computation or operator implementations.

Accuracy test results are therefore not provided.

Checklist

  • Add at least a tag in the PR title.
  • Format your code, run pre-commit before commit.
  • Add unit tests. Please write the reason in this PR if no unit tests.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

Copilot AI review requested due to automatic review settings May 7, 2026 15:21
@paddle-bot

paddle-bot Bot commented May 7, 2026

Thanks for your contribution!

This comment was marked as outdated.


@PaddlePaddle-bot

PaddlePaddle-bot commented May 7, 2026

🤖 Paddle-CI-Agent | ci_status_monitor | 2026-05-13 00:06:01

CI report generated from the following code (updated every 30 minutes):


1 Task overview

⚠️ CI is in progress: 1 Required task has failed and 7 Required tasks are still running; please address the Approval issue first.

| Total runs (reruns) | Total tasks | ✅ Passed | ❌ Failed | ⏳ Running | ⏸️ Pending | Skipped |
|---|---|---|---|---|---|---|
| 38 (0) | 38 | 26 | 1 | 10 | 1 | 0 |

2 Task status summary

2.1 Required tasks: 2/10 passed

Required tasks block merging; failures must be handled first.

| Status | Task | Duration | Root cause | Suggested fix | Log | Rerun |
|---|---|---|---|---|---|---|
| ❌ | Approval | 9s | PR issue: modifying logging behavior requires RD approval | Contact xyxinyang or zyyzghb for approval | Job | - |
| ⏳ | Extracted partial CE model tasks to run in CI. / run_ce_cases | - | Running | - | Job | - |
| ⏳ | Run Base Tests / base_tests | - | Running | - | Job | - |
| ⏳ | Run FastDeploy Unit Tests and Coverage / run_tests_with_coverage | - | Running | - | Job | - |
| ⏳ | Run Four Cards Tests / run_4_cards_tests | - | Running | - | Job | - |
| ⏳ | Run Stable Tests / stable_tests | - | Running | - | Job | - |
| ⏳ | xpu_4cards_case_test / run_xpu_4cards_cases | - | Running | - | Job | - |
| ⏳ | xpu_8cards_case_test / run_xpu_8cards_cases | - | Running | - | Job | - |

The remaining 2 required tasks passed.

2.2 Optional tasks — 24/28 passed

Optional tasks do not block merging; failures are informational only.

| Status | Task | Duration | Log | Rerun |
|---|---|---|---|---|
| ⏳ | Run iluvatar Tests / run_iluvatar_cases | - | Job | - |
| ⏳ | xpu_unit_test / run_xpu_unit_test | - | Job | - |
| ⏳ | Trigger Jenkins for PR | - | Job | - |
| ⏸️ | CI_HPU | - | - | - |

The remaining 24 optional tasks passed.

3 Failure details (Required only)

Approval — code style (confidence: high)

Approval

  • Status: ❌ failed
  • Error type: code style
  • Confidence: high
  • Root-cause summary: the PR adds new logging calls and needs approval from a designated RD before it can merge
  • Analyzer: generic analysis (fallback)

Root-cause details:
check_approval.sh detected several new logging calls in the PR diff (data_processor_logger.info(), data_processor_logger.debug(), log_request(), and so on), which trigger FastDeploy's log-modification approval rule. Changes to .info/.debug/.error/log_request behavior require an Approve review from at least one designated FastDeploy RD. Exit code 6 indicates one pending approval violation was detected.

Key log:

Detected log modification in diff:
+        data_processor_logger.info(
+        data_processor_logger.debug(f"smart_resize_paddleocr: ...")
+        log_request(
0. You must have one FastDeploy RD (xyxinyang(zhouchong), zyyzghb(zhangyongyue)) approval for modifying logging behavior (.info/.debug/.error/log_request).
There are 1 approved errors.
##[error]Process completed with exit code 6.

Suggested fix:

  1. The PR author should ask xyxinyang (zhouchong) or zyyzghb (zhangyongyue) to submit an Approve review on this PR.
  2. Once the designated RD approves, the Approval CI will re-trigger automatically and pass.

Fix summary: contact xyxinyang or zyyzghb to complete the logging-change approval.

Link: view log


@codecov-commenter

codecov-commenter commented May 7, 2026

Codecov Report

❌ Patch coverage is 77.85052% with 406 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@203c7da). Learn more about missing BASE report.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| fastdeploy/input/processor.py | 41.35% | 245 Missing and 40 partials ⚠️ |
| fastdeploy/input/multimodal/qwen_vl.py | 85.40% | 25 Missing and 9 partials ⚠️ |
| fastdeploy/input/multimodal/ernie4_5_vl.py | 92.01% | 18 Missing and 3 partials ⚠️ |
| fastdeploy/input/multimodal/mm_processor.py | 93.38% | 9 Missing and 8 partials ⚠️ |
| fastdeploy/input/multimodal/paddleocr_vl.py | 86.40% | 13 Missing and 4 partials ⚠️ |
| ...tdeploy/input/multimodal/image_processors/ernie.py | 91.97% | 8 Missing and 5 partials ⚠️ |
| fastdeploy/input/preprocess.py | 54.54% | 10 Missing ⚠️ |
| ...stdeploy/input/multimodal/image_processors/qwen.py | 93.02% | 3 Missing and 3 partials ⚠️ |
| fastdeploy/entrypoints/llm.py | 50.00% | 1 Missing and 1 partial ⚠️ |
| fastdeploy/entrypoints/chat_utils.py | 50.00% | 1 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #7747   +/-   ##
==========================================
  Coverage           ?   63.58%           
==========================================
  Files              ?      472           
  Lines              ?    65920           
  Branches           ?    10129           
==========================================
  Hits               ?    41915           
  Misses             ?    21150           
  Partials           ?     2855           
| Flag | Coverage Δ |
|---|---|
| GPU | 72.48% <77.85%> (?) |
| XPU | 6.93% <0.00%> (?) |

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.


Copilot AI review requested due to automatic review settings May 9, 2026 07:40


Copilot AI review requested due to automatic review settings May 9, 2026 11:35


Copilot AI review requested due to automatic review settings May 12, 2026 06:43


@luukunn luukunn changed the title [DataProcessor]merge processor [DataProcessor] Refactor and unify text/multimodal processor pipeline May 12, 2026

Copilot AI review requested due to automatic review settings May 12, 2026 09:20
Contributor

Copilot AI left a comment


Pull request overview

Copilot reviewed 26 out of 26 changed files in this pull request and generated 4 comments.

Comment on lines 202 to 207

 if content is None:
-    parsed_content = []
+    parsed_content = content
 elif isinstance(content, str):
-    parsed_content = [{"type": "text", "text": content}]
+    parsed_content = content
 else:
     parsed_content = [parse_content_part(mm_parser, part) for part in content]
Comment on lines +769 to +771

if isinstance(self.tokenizer, (LlamaTokenizer, Llama3Tokenizer)) and not self.tokenizer.pad_token_id:
    return self.tokenizer.eos_token
return self.tokenizer.pad_token_id
Comment on lines +754 to +764

if prompt_token_ids[0] > self.tokenizer.vocab_size:
    if not add_prefix_space:
        log_request(
            level=1,
            message="bad_words: '{prompt}' token id {token_id} > vocab_size, skipping",
            prompt=prompt,
            token_id=prompt_token_ids[0],
        )
    continue
if prompt_token_ids not in token_ids:
    token_ids.extend(prompt_token_ids)
Comment on lines +429 to +436

max_tokens = max_model_len - len(request["prompt_token_ids"])
if request.get("max_tokens") is None:
    request["max_tokens"] = max(1, max_tokens)
else:
    request["max_tokens"] = min(max_tokens, request["max_tokens"])

# Default reasoning_max_tokens (only for models that need it, e.g. Ernie)
if self.set_default_reasoning_max_tokens and request.get("reasoning_max_tokens") is None:


@PaddlePaddle-bot PaddlePaddle-bot left a comment


🤖 Paddle-CI-Agent | pr_review | 2026-05-12 23:57:40

📋 Review summary

PR overview: refactors and unifies the request-preprocessing pipeline for text and multimodal models, introducing a unified Processor framework and adding several new VL multimodal processors.
Scope of changes: fastdeploy/input/ (processor, multimodal/), fastdeploy/entrypoints/ (llm.py, chat_utils.py)
Impact tags: [DataProcessor] [APIServer]

📝 PR convention check

The title [DataProcessor] Refactor and unify text/multimodal processor pipeline follows the convention, and [DataProcessor] is an official tag. The description contains all required sections (Motivation / Modifications / Usage or Command / Accuracy Tests / Checklist) and is complete. ✓

Issues

| Level | File | Summary |
|---|---|---|
| 🟡 Suggestion | fastdeploy/input/processor.py:557 | assert used to validate runtime user input; skipped under python -O |
| 🟡 Suggestion | fastdeploy/input/multimodal/image_processors/ernie.py:148 | set_pixels validates arguments with assert, a no-op under python -O |
| 🟡 Suggestion | fastdeploy/input/multimodal/image_processors/ernie.py:153 | Same as above, for the max_pixels argument check |
| ❓ Question | fastdeploy/entrypoints/chat_utils.py:203 | None content changed from the old [] to None; the semantic change needs a compatibility check |
| 🟡 Suggestion | - | Legacy processor files (base_processor.py, text_processor.py, multimodal_processor.py) still exist and tests still reference them; a cleanup plan should be stated |

Overall assessment

The refactoring is well structured overall: the new multimodal processor framework is sensibly organized and test coverage is fairly thorough. The asserts used for runtime argument validation should be fixed, and a cleanup plan for the legacy processor files should be stated.

if not request.get("prompt_token_ids"):
    if request.get("prompt"):
        prompt = request.get("prompt")
        assert isinstance(prompt, str) or (

🟡 Suggestion: assert is used here to validate runtime user input; under python -O it is skipped, so downstream logic receives an illegal type and fails with a hard-to-trace error.

Consider replacing it with an explicit raise ValueError:
if not (isinstance(prompt, str) or (isinstance(prompt, list) and all(isinstance(t, int) for t in prompt))):
    raise ValueError(f"prompt must be a string or a list of integers, but got {type(prompt)}")
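The reviewer's point is easy to reproduce: `python -O` compiles `assert` statements out entirely, so an assert-based input check silently disappears. A minimal stdlib demonstration (the snippet and variable names below are illustrative, not from the PR):

```python
# Demonstrate that `python -O` strips assert statements: the same snippet
# raises AssertionError in a normal run but falls through under -O.
import subprocess
import sys

snippet = "assert False, 'invalid input'\nprint('check skipped')"

normal = subprocess.run([sys.executable, "-c", snippet],
                        capture_output=True, text=True)
optimized = subprocess.run([sys.executable, "-O", "-c", snippet],
                           capture_output=True, text=True)

print(normal.returncode)      # non-zero: the assert fired
print(optimized.stdout)       # the assert was compiled out, print runs
```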


def set_pixels(self, min_pixels=None, max_pixels=None, msg=""):
    if min_pixels is not None:
        assert isinstance(min_pixels, int) and min_pixels >= 0, "min_pixels must be positive int"

🟡 Suggestion: assert is used for argument validation and is silently skipped under python -O. Consider raising ValueError instead:

if not (isinstance(min_pixels, int) and min_pixels >= 0):
    raise ValueError("min_pixels must be a non-negative int")

        self.min_pixels = min_pixels
        self.size["min_pixels"] = int(min_pixels)
    if max_pixels is not None:
        assert isinstance(max_pixels, int) and max_pixels > 0, "max_pixels must be positive int"

🟡 Suggestion: same as line 148: the max_pixels assert is a no-op under python -O. Suggested change:

if not (isinstance(max_pixels, int) and max_pixels > 0):
    raise ValueError("max_pixels must be a positive int")

 parsed_content = []
 if content is None:
-    parsed_content = []
+    parsed_content = content

❓ Question: the old code set parsed_content = [] (empty list) when content is None; the new code keeps None.

Most tokenizers' apply_chat_template may treat None and [] differently (e.g. {% if content %} vs {% for part in content %} in a Jinja template). Please confirm that the chat templates of all supported models handle messages with content=None correctly.
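The compatibility concern can be illustrated without any template engine: a Jinja `{% for %}` loop iterates its value just like Python does, so `None` raises while `[]` is a harmless no-op, even though both are falsy in an `{% if %}` check. A small sketch, with plain Python standing in for the template behavior (`render_parts` is a hypothetical helper, not PR code):

```python
# None vs [] behave identically in truthiness checks but differently
# under iteration, which is what a {% for part in content %} loop does.

def render_parts(content):
    # Mirrors a Jinja for-loop over `content`.
    return [part for part in content]

print(render_parts([]))        # empty list: loop body never runs, no error

try:
    render_parts(None)         # None is not iterable: templates that loop break
except TypeError as exc:
    print("TypeError:", exc)

print(bool(None) == bool([]))  # {% if content %} cannot tell them apart
```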
